The Invisible Threat: How AI Could Help Hackers Plant Flaws in Computer Chips

Posted on October 20, 2025 at 10:55 PM

Imagine a world where your everyday devices (computers, smartphones, servers) carry hidden vulnerabilities baked into their chips: flaws so subtle that they can lie dormant for months before silently undermining security. That is precisely the warning from researchers at NYU Tandon School of Engineering, who found that widely available AI tools can be used to design hardware flaws that are extraordinarily difficult to detect. ([engineering.nyu.edu][1])


What the Study Found

The study, published in IEEE Security & Privacy, shows that large language models (LLMs) like ChatGPT and other generative-AI systems can assist both novices and experts in planting so-called hardware Trojans—malicious modifications hidden within chip designs. ([engineering.nyu.edu][1])

Key findings:

  • In a competition NYU ran over two years (the “AI Hardware Attack Challenge”, held during CSAW, its student-run cybersecurity event), teams used generative AI to insert vulnerabilities into open-source hardware (e.g., RISC-V processors, cryptographic accelerators) and then demonstrated attacks. ([engineering.nyu.edu][1])
  • Some teams developed automated tools that needed very little human oversight: the AI could analyse hardware code, identify vulnerable spots, insert malicious logic, and generate a working exploit; a conceptual sketch of what such hidden trigger logic looks like follows this list. ([engineering.nyu.edu][1])
  • Perhaps most worryingly, teams with minimal hardware knowledge (undergraduate students, in some cases) achieved medium- to high-severity vulnerabilities. The AI lowered the barrier to designing hidden flaws. ([engineering.nyu.edu][1])
  • The models’ usual safeguards (content filters, policy protections) proved weak in this domain: participants bypassed them by framing prompts in academic or seemingly harmless terms, or by using less closely monitored languages. ([engineering.nyu.edu][1])
  • The permanence of hardware vulnerabilities amplifies the threat: once a chip is manufactured with a Trojan, you cannot simply issue a patch—you’d have to replace the component entirely. ([engineering.nyu.edu][1])
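
To make the threat concrete, here is a minimal, purely illustrative Python sketch of how a hardware Trojan's trigger logic works conceptually. It is not chip-design code (real Trojans are inserted into hardware description languages such as Verilog), and the class name and the 32-bit "magic" trigger value are invented for illustration.

```python
# Purely illustrative model of a hardware Trojan's trigger logic.
# Real Trojans live in hardware description languages (e.g., Verilog);
# every name and value here is hypothetical.

MAGIC_TRIGGER = 0xDEADBEEF  # hypothetical attacker-chosen 32-bit value


class PrivilegeChecker:
    """Models a processor's privilege check with a hidden Trojan."""

    def check_access(self, bus_value: int, is_supervisor: bool) -> bool:
        # Legitimate logic: only supervisor mode may touch protected state.
        allowed = is_supervisor

        # Trojan: one extra hidden comparator. When the attacker places
        # the magic value on the bus, the privilege check silently passes.
        if bus_value == MAGIC_TRIGGER:
            allowed = True

        return allowed


checker = PrivilegeChecker()
print(checker.check_access(0x12345678, is_supervisor=False))     # False: normal behaviour
print(checker.check_access(MAGIC_TRIGGER, is_supervisor=False))  # True: Trojan fires
```

The point of the sketch is the asymmetry: the malicious addition is a single comparison, yet under ordinary inputs the design behaves exactly to specification, which is why such flaws are so hard to spot in design reviews.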

Why This Matters

  • Scale and accessibility: Previously, inserting hardware Trojans required deep domain expertise. Now, AI is making it accessible to a broader range of attackers.
  • Hidden risk in critical systems: Chips are embedded in devices from smartphones to military gear to industrial infrastructure. A hidden flaw could enable unauthorized memory access, leak encryption keys, or cause targeted failures under predetermined conditions. ([engineering.nyu.edu][1])
  • Irreversible once manufactured: Unlike software bugs, which can be patched, hardware flaws endure; neither manufacturers nor users can easily remediate them after fabrication.
  • Dual-use nature of AI: The same tools that assist chip designers (for example, the NYU team’s earlier “Chip Chat” project showed ChatGPT could help design processors) can now be used for malicious purposes. ([engineering.nyu.edu][1])
  • Emerging arms race: The findings highlight the need not just for stronger AI safeguards but also for new verification and security-analysis tools specifically targeting hardware design. ([engineering.nyu.edu][1])

What Can Be Done?

While researchers are working to get ahead of the problem, much remains to be done. Here are the key actionable takeaways:

  • Improved guardrails for AI models: LLMs need stronger controls, especially around generating code that could serve malicious hardware purposes. The existing safeguards were shown to be easy to bypass.
  • Rigorous hardware verification processes: Chip designs must undergo comprehensive security reviews that check for inserted Trojans, backdoors, or other malicious logic.
  • Supply chain accountability: Given that chips are often manufactured offshore or by third parties, ensuring the integrity of every step, from design to fabrication, is critical.
  • Detection of embedded hardware flaws: Tools and methodologies are needed that can audit manufactured chips for hidden vulnerabilities; the sketch after this list illustrates why rare trigger conditions make such detection hard.
  • Awareness and policy adaptation: Regulators and industry need to recognise that hardware security is increasingly a vector for attack—and that AI plays a role both as an enabler and mitigator.
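
To see why post-manufacture detection is so hard, consider random functional testing against a trusted reference ("golden") model. A minimal sketch, reusing the hypothetical 32-bit trigger from the earlier example, shows that random test vectors essentially never reach a rare trigger:

```python
import random

# A minimal sketch of why random functional testing struggles to expose
# a hardware Trojan: the trigger is one value out of 2**32, so random
# vectors almost never reach it. All values here are hypothetical.

MAGIC_TRIGGER = 0xDEADBEEF  # hypothetical attacker-chosen trigger


def golden_model(bus_value: int, is_supervisor: bool) -> bool:
    """Trusted reference behaviour: supervisor-only access."""
    return is_supervisor


def trojaned_model(bus_value: int, is_supervisor: bool) -> bool:
    """Same logic plus the hidden trigger."""
    return is_supervisor or bus_value == MAGIC_TRIGGER


random.seed(0)
trials = 1_000_000
mismatches = 0
for _ in range(trials):
    bus = random.getrandbits(32)  # random bus value
    sup = random.random() < 0.5   # random privilege mode
    if golden_model(bus, sup) != trojaned_model(bus, sup):
        mismatches += 1

# Expected mismatches = trials / 2**33, roughly 0.0001, so this almost
# certainly prints 0: the Trojan is invisible to random testing.
print(f"mismatches found in {trials:,} random tests: {mismatches}")
```

With only one triggering value among 2^32 possible bus values, a million random tests are expected to expose the Trojan about 0.0001 times. This is why the verification and detection points above call for dedicated techniques (formal methods, structural analysis, information-flow checking) rather than random simulation alone.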

Glossary

  • Hardware Trojan: A malicious modification to a hardware design (e.g., a chip) that enables unintended access, data exfiltration, or system failure.
  • Large Language Model (LLM): A type of AI model trained on massive text corpora that can generate human-like text and code (e.g., ChatGPT).
  • Open-source hardware design: Hardware designs (e.g., processor architectures) made freely available for modification and use, analogous to open-source software.
  • Verification: The process of checking and validating that a hardware design behaves as intended—free from unintended flaws or malicious modifications.
  • Supply chain security: The practice of ensuring that all parts, from design to manufacturing to delivery, are secure and trustworthy.

What’s Next?

This study is a wake-up call: as AI tools become more powerful and accessible, so too do the methods for hidden, durable attacks on hardware. While the attacks described haven't yet been observed in the wild at scale, the research shows they are feasible today. Industry, regulators, and cybersecurity professionals must treat hardware security not as a niche specialty but as a central concern.

By exposing how AI can weaponise chip design, the researchers at NYU Tandon are highlighting a new frontier in cybersecurity. The question now: will we be ready?

Source: AI tools can help hackers plant hidden flaws in computer chips, study finds – NYU Tandon

[1]: https://engineering.nyu.edu/news/ai-tools-can-help-hackers-plant-hidden-flaws-computer-chips-study-finds “AI tools can help hackers plant hidden flaws in computer chips, study finds NYU Tandon School of Engineering”